1.
Brain Stimul; 16(3): 840-853, 2023.
Article in English | MEDLINE | ID: mdl-37201865

ABSTRACT

The objective and scope of this Limited Output Transcranial Electrical Stimulation 2023 (LOTES-2023) guidance is to update the previous LOTES-2017 guidance; the two documents should therefore be considered together. LOTES provides a clearly articulated and transparent framework for the design of devices providing limited-output (a specified low-intensity range) transcranial electrical stimulation for a variety of intended uses. These guidelines can inform trial design and regulatory decisions, but most directly inform manufacturer activities, and hence were presented in LOTES-2017 as a "Voluntary industry standard for compliance controlled limited output tES devices". In LOTES-2023 we emphasize that these standards are largely aligned across international standards and national regulations (including those of the USA, the EU, and South Korea), and so might be better understood as "Industry standards for compliance controlled limited output tES devices". LOTES-2023 is therefore updated to reflect a consensus among emerging international standards, as well as the best available scientific evidence. "Warnings" and "Precautions" are updated to align with current biomedical evidence and applications. LOTES standards apply to a constrained device dose range; within this range, and for different use cases, manufacturers are responsible for conducting device-specific risk management.


Subjects
Transcranial Direct Current Stimulation; Risk Management
2.
Neural Comput; 26(8): 1763-1809, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24877728

ABSTRACT

Hierarchical temporal memory (HTM) is a biologically inspired framework that can be used to learn invariant representations of patterns in a wide range of applications. Classical HTM learning is mainly unsupervised, and once training is completed the network structure is frozen, which makes further training (i.e., incremental learning) problematic. In this letter, we develop a novel technique for (incremental) HTM supervised learning based on gradient descent error minimization. We prove that error backpropagation can be naturally and elegantly implemented through native HTM message passing based on belief propagation. Our experimental results demonstrate that a two-stage training approach composed of unsupervised pretraining followed by supervised refinement is very effective (both accurate and efficient), in line with recent findings on other deep architectures.
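The two-stage scheme the abstract describes (unsupervised pretraining of a structure that is then frozen, followed by supervised refinement via gradient descent) can be illustrated in miniature. The sketch below is not the paper's HTM/belief-propagation implementation; it only shows the general pattern, using k-means as a stand-in for unsupervised coincidence learning and a softmax read-out refined by gradient descent on top of the frozen prototypes. All names, data, and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian clusters in 2-D, one per class.
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Stage 1 -- unsupervised pretraining: learn K prototype "coincidences"
# with a few k-means rounds; afterwards the prototypes are frozen.
K = 4
centers = X[::25][:K].copy()  # deterministic, class-balanced initialization
for _ in range(10):
    assign = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(axis=1)
    for k in range(K):
        if (assign == k).any():
            centers[k] = X[assign == k].mean(axis=0)

def beliefs(X):
    """Soft prototype responsibilities: a crude stand-in for HTM beliefs."""
    d2 = ((X[:, None, :] - centers) ** 2).sum(-1)
    e = np.exp(-d2)
    return e / e.sum(axis=1, keepdims=True)

# Stage 2 -- supervised refinement: gradient descent on the cross-entropy
# of a softmax read-out, with the unsupervised prototypes held fixed.
W = np.zeros((K, 2))
F, Y = beliefs(X), np.eye(2)[y]
for _ in range(200):
    P = np.exp(F @ W)
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.5 * F.T @ (P - Y) / len(X)  # gradient step on cross-entropy

acc = ((beliefs(X) @ W).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The design choice mirrored here is the one the abstract reports as effective: the representation learned without labels is kept fixed, and only the supervised stage is driven by gradient descent, which keeps incremental refinement cheap relative to retraining the whole network.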


Subjects
Memory; Neural Networks, Computer; Algorithms; Information Theory; Pattern Recognition, Automated